
    Interpretability of machine learning solutions in public healthcare : the CRISP-ML approach

    Public healthcare has a history of cautious adoption of artificial intelligence (AI) systems. The rapid growth of data collection and linking capabilities, combined with the increasing diversity of data-driven AI techniques, including machine learning (ML), has brought both ubiquitous opportunities for data analytics projects and increased demands for regulation and accountability of their outcomes. As a result, the area of interpretability and explainability of ML is gaining significant research momentum. Yet while ML methods themselves have advanced, the methodological side has shown limited progress, which limits the practicality of using ML in the health domain: the difficulty of explaining the outcomes of ML algorithms to medical practitioners and policy makers in public health has been a recognized obstacle to the broader adoption of data science approaches in this domain. This study builds on earlier work that introduced CRISP-ML, a methodology that determines the interpretability level required by stakeholders for a successful real-world solution and then helps to achieve it. CRISP-ML was built on the strengths of CRISP-DM, addressing its gaps in handling interpretability. Its application in the public healthcare sector follows its successful deployment in a number of recent real-world projects across several industries and fields, including credit risk, insurance, utilities, and sport. This study elaborates on how the CRISP-ML methodology determines, measures, and achieves the necessary level of interpretability of ML solutions in the public healthcare sector. It demonstrates how CRISP-ML addressed the problems of data diversity, the unstructured nature of the data, and the relatively low linkage between diverse data sets in the healthcare domain. The characteristics of the case study used here are typical of healthcare data, and CRISP-ML delivered on these issues, ensuring the required level of interpretability of the ML solutions discussed in the project. The approach ensured that interpretability requirements were met, taking into account public healthcare specifics, regulatory requirements, project stakeholders, project objectives, and data characteristics. The study concludes with three main directions for the development of the presented cross-industry standard process.
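    As an illustration of the kind of requirement-setting step CRISP-ML describes, the sketch below maps stakeholder needs onto an ordinal interpretability scale and filters candidate models against the strictest requirement. The scale, stakeholder list, and model labels are all hypothetical; CRISP-ML's actual levels and procedure are defined in the methodology itself.

```python
from enum import IntEnum

class InterpLevel(IntEnum):
    """Illustrative ordinal scale; CRISP-ML's actual levels may differ."""
    BLACK_BOX = 0      # predictions only
    POST_HOC = 1       # e.g. feature attributions on request
    TRANSPARENT = 2    # directly readable model (rules, scorecards)

# Hypothetical stakeholder requirements for a public-health project.
requirements = {
    "clinician": InterpLevel.POST_HOC,
    "policy_maker": InterpLevel.TRANSPARENT,
    "regulator": InterpLevel.TRANSPARENT,
}
required = max(requirements.values())  # strictest stakeholder wins

# Hypothetical candidate models and their interpretability levels.
candidates = {
    "gradient_boosting": InterpLevel.POST_HOC,
    "logistic_scorecard": InterpLevel.TRANSPARENT,
}
admissible = [m for m, lvl in candidates.items() if lvl >= required]
print(f"required level: {required.name}; admissible models: {admissible}")
```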

    The new norm : Computer Science conferences respond to COVID-19

    The disruption from COVID-19 has been felt deeply across all walks of life. Academic conferences, a key pillar of dissemination and interaction around research and development, have likewise taken a hit. We analyse how conferences in the area of Computer Science have reacted to this disruption with respect to their mode of offering and registration prices, and whether their response is contingent upon specific factors such as where the conference was to be hosted, its ranking, its publisher, or its originally scheduled date. To achieve this, we collected metadata associated with 170 conferences in the area of Computer Science and, as a means of comparison, 25 Psychology conferences. We show that Computer Science conferences have demonstrated agility and resilience by moving online in response to COVID-19 (approximately 76% of them did so), many with no changes to their schedule, particularly those in North America and those with a higher ranking. While registration fees fell by an average of 42% with the onset of COVID-19, conferences still have to facilitate attendance on a large scale, with the logistics and costs that this involves. In conclusion, we discuss the implications of our findings and speculate on what they mean for conferences, including those in Computer Science, in the post-COVID-19 world.
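    For readers who want to reproduce this kind of summary statistic from conference metadata, a minimal pandas sketch follows. The column names and toy rows are hypothetical, not the paper's actual dataset; only the two quantities computed (share of Computer Science conferences online, mean fee reduction) mirror the figures reported above.

```python
import pandas as pd

# Hypothetical schema; the paper's actual metadata fields may differ.
df = pd.DataFrame({
    "field": ["CS", "CS", "CS", "Psych"],
    "mode": ["online", "online", "cancelled", "postponed"],
    "fee_before": [800, 650, 700, 400],
    "fee_after": [450, 300, 0, 400],
})

cs = df[df["field"] == "CS"]
share_online = (cs["mode"] == "online").mean()

# Fee change computed only for conferences that actually ran online.
held = cs[cs["mode"] == "online"]
fee_drop = 1 - (held["fee_after"] / held["fee_before"]).mean()

print(f"CS conferences online: {share_online:.0%}")
print(f"mean registration-fee reduction: {fee_drop:.0%}")
```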

    Smart scalable ML-blockchain framework for large-scale clinical information sharing

    Large-scale clinical information sharing (CIS) provides significant advantages for medical treatment, including enhanced service standards and accelerated scheduling of health services. Current CIS faces many challenges, such as data privacy, data integrity, and data availability across multiple healthcare institutions. This study introduces an innovative blockchain-based electronic healthcare system that incorporates synchronous data backup and a highly encrypted data-sharing mechanism. Blockchain technology, which eliminates centralized organizations and reduces the number of fragmented patient files, could make it easier to use machine learning (ML) models for predictive diagnosis and analysis, which in turn might lead to better medical care. The proposed model achieves an improved patient-centered CIS by personalizing the separation of information with an intelligent “allowed list” governing clinician access to data. This work introduces a hybrid ML-blockchain solution that combines traditional data storage with blockchain-based access. The experimental analysis evaluated the proposed model against competing models in comparative and quantitative studies on large-scale CIS examples, in terms of viability, stability, protection, and robustness, with improved results.
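    To make the hybrid design concrete, here is a minimal Python sketch, under stated assumptions, of one way such a system could pair off-chain record storage with a hash chain for integrity and an “allowed list” gating clinician reads. The class names, fields, and logic are illustrative inventions, not the paper's implementation.

```python
import hashlib
import json
import time

class Block:
    """A single block holding the hash of an off-chain clinical record."""
    def __init__(self, record_hash, prev_hash):
        self.timestamp = time.time()
        self.record_hash = record_hash
        self.prev_hash = prev_hash
        self.hash = hashlib.sha256(
            f"{self.timestamp}{record_hash}{prev_hash}".encode()
        ).hexdigest()

class PatientLedger:
    """Hybrid store: full records stay off-chain; the chain keeps only
    hashes, plus a patient-managed 'allowed list' of clinician IDs."""
    def __init__(self):
        self.chain = []
        self.records = {}    # off-chain store: hash -> record
        self.allowed = set() # clinicians the patient has granted access

    def add_record(self, record):
        payload = json.dumps(record, sort_keys=True).encode()
        record_hash = hashlib.sha256(payload).hexdigest()
        prev = self.chain[-1].hash if self.chain else "0" * 64
        self.chain.append(Block(record_hash, prev))
        self.records[record_hash] = record
        return record_hash

    def read(self, clinician_id, record_hash):
        if clinician_id not in self.allowed:
            raise PermissionError("clinician not on the allowed list")
        return self.records[record_hash]

ledger = PatientLedger()
ledger.allowed.add("dr-lee")
h = ledger.add_record({"patient": "p-001", "note": "routine check"})
print(ledger.read("dr-lee", h))
```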

    Review of innovative immersive technologies for healthcare applications

    Immersive technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), can connect people through enhanced data visualizations that better involve stakeholders as integral members of the process. They have started to change research on multidimensional genomic data analysis for disease diagnostics and treatment, and have been highlighted in work on health and clinical needs, especially for precision medicine innovation. The use of immersive technology for genomic data analysis has recently received attention from the research community: genomic data analytics research seeks to integrate immersive technologies to build more natural human-computer interactions that allow better perceptual engagement. Immersive technologies, especially VR, help humans perceive the digital world as real and support learning outcomes with lower performance errors and higher accuracy. However, there are few reviews of immersive technologies in healthcare and genomic data analysis with specific digital health applications. This paper contributes a comprehensive review of immersive technologies for digital health applications, including patient-centric applications, medical education, and data analysis, especially genomic data visual analytics. We highlight the evolution of visual analysis using VR as a case study of how immersive technologies can, step by step, move into the genomic data analysis domain. The discussion and conclusion summarize the usability and innovation of current immersive technology applications and future work in the healthcare domain and digital health data visual analytics.

    Child-Custody Reform and Marriage-Specific Investment in Children

    Research on child custody primarily focuses on the well-being of children following divorce. We extend this literature by examining how the prospect of joint child custody affects marriage-specific investment in children’s private-school education. Variation in the timing of joint-custody reforms across states proxies for the prospect of joint child custody and provides a natural-experiment framework with which to examine marriage-specific investment in children. The probability of children’s private-school attendance declines by 13 percent in states that adopt joint-custody laws. The effects of joint-custody reform are larger in states whose property-division laws consistently favor one parent over the other. The results are largely robust across subsamples partitioned by socioeconomic status.
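    The staggered timing of reforms across states lends itself to a two-way fixed-effects difference-in-differences design. The sketch below shows that style of estimation on simulated data; the variable names, toy data, and effect size baked into the simulation are illustrative only and are not the paper's specification or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: states adopt joint custody in different years (simulated).
rng = np.random.default_rng(0)
rows = []
for state in range(20):
    adopt_year = rng.integers(1975, 1990)
    for year in range(1970, 1995):
        treated = int(year >= adopt_year)
        # Baseline 10% private-school rate, reduced under joint custody.
        p = 0.10 * (1 - 0.13 * treated)
        rows.append({"state": state, "year": year,
                     "joint_custody": treated,
                     "private_school": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# State and year fixed effects absorb level differences; standard
# errors are clustered by state, the level of treatment assignment.
model = smf.ols("private_school ~ joint_custody + C(state) + C(year)", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(res.params["joint_custody"])
```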

    The CRISP-ML approach to handling causality and interpretability issues in machine learning

    Interpretability in machine learning projects, and one of its aspects, causal inference, have recently gained significant interest and focus. Given the recent rapid appearance of frameworks, methods, algorithms, and software, most of which are in the early stages of their development, it can be confusing for practitioners and researchers involved in a machine learning project to choose the approach and set of techniques that would efficiently deliver valid insights while minimising the known risks of failure of data-related projects. The CRISP-ML process methodology minimises this confusion by outlining a clear step-by-step process that explicitly treats interpretability issues at every stage. The paper presents an update of CRISP-ML which incorporates causality in a similar way and supports the formalisation, design, and implementation of specific instances of the CRISP-ML process, subject to the required levels of interpretability and causality of the results. The approach is demonstrated on examples from the domains of credit risk, public health, and healthcare.

    AI elegance and ethics : just married?

    The following paper is dedicated to the 21st century's “recent marriage” between the aesthetics of beauty and elegance on the one hand, and the ethics of choice on the other, involved in the “humanizing mission” of AI digital assistance. In the context of the 4.0 Social Revolution, it is shown how modern aesthetic concepts of AI design can go hand in hand with the ethics of choice because of their inherent connection, traced back to earlier moments of European history and expanded around the French Revolution in Schiller's 27 letters On the Aesthetic Education of Man (1795), as well as earlier in Leibniz's holistic aesthetics. Relevant arguments are discussed to disclose the “secrets” of how this inherent connection is carried out within the metaphysical background of faith, even though the modern 20th-century attitude had seemingly dismissed this “philosophical burden” during the late 1970s. In 2011 a “New Aesthetics” was introduced without the burden of metaphysics, aiming to create a new “lens” for the perception of elegance, simplicity, and clarity by young, “digitally naive” avant-garde artists. However, elegance and beauty had previously been claimed by algorithmic solutions, starting with Leibniz and Condorcet, which gave birth to 20th-century computational aesthetics. The social life of AI algorithms (we cannot perceive them as humans) seems mostly intended to optimize corporate solutions. But 21st-century artists are about to take their chances on creating something new that makes us feel artificial intelligence is an integral part of a beautiful mind. AI algorithms can offer smart solutions, but the wisdom of choice has to remain a human call.

    Believable agents build relationships on the web

    In this paper we present the Believable Negotiator, the formalism behind a Web business negotiation technology that treats relationships as a commodity. It supports building, maintaining, and evolving relationships, and passing them to other agents, and it utilises such relationships in agent interaction. The Believable Negotiator also takes into account "relationship gossip": the information, supplied by its information-providing agents, about the position of the respective agents in their networks of relationships beyond the trading space. It is embodied in a 3D web space that can be translated to different virtual-world platforms, enabling the creation of an integrated 3D trading space geared for Web 3.0.
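    As a rough illustration of this kind of relationship bookkeeping, the Python sketch below blends an agent's direct negotiation experience with third-party "relationship gossip" via a trust weight. The class, weights, and update rules are hypothetical, not the Believable Negotiator's actual formalism.

```python
class BelievableAgent:
    """Toy sketch of relationship bookkeeping: direct experience plus
    third-party 'relationship gossip', blended with a trust weight.
    The update rule is illustrative, not the paper's formalism."""
    def __init__(self, name, gossip_weight=0.3):
        self.name = name
        self.gossip_weight = gossip_weight
        self.relationships = {}  # partner -> strength in [0, 1]

    def record_interaction(self, partner, outcome):
        # outcome in [0, 1]: how well the negotiation went.
        old = self.relationships.get(partner, 0.5)
        self.relationships[partner] = 0.8 * old + 0.2 * outcome

    def hear_gossip(self, partner, reported_strength):
        # Fold in what information-providing agents report about the
        # partner's conduct beyond this trading space.
        old = self.relationships.get(partner, 0.5)
        self.relationships[partner] = (
            (1 - self.gossip_weight) * old
            + self.gossip_weight * reported_strength
        )

alice = BelievableAgent("alice")
alice.record_interaction("bob", 0.9)
alice.hear_gossip("bob", 0.4)  # bob behaves worse outside this market
print(alice.relationships["bob"])
```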

    Power conservation in wired wireless networks

    A joint university-industry collaboration has designed a system for conserving power in LTE (Long Term Evolution) wireless networks for mobile devices such as phones. The solution may be applied to any wireless technology in which all stations are wired to a backbone (e.g. it may not be applied to an 802.11 mesh). This paper describes the solution method, which is based on a distributed multiagent system in which one agent is associated with each station. Extensive simulations show that the system delivers robust performance: the computational overhead is within acceptable limits, the solution is stable in the presence of unexpected fluctuations in demand patterns, and scalability is achieved by the agents making decisions locally.
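    To give a feel for what "agents making decisions locally" can look like, here is a toy Python sketch of a per-station agent that sleeps its station when local demand is low and wired neighbours report enough spare capacity to absorb the load. The thresholds, capacity figures, and decision rule are invented for illustration and are not the paper's algorithm.

```python
import random

class StationAgent:
    """One agent per station; decides locally whether its station can
    sleep, based on observed demand and whether wired neighbours report
    spare capacity (all numbers here are illustrative)."""
    def __init__(self, station_id, capacity=100.0, sleep_threshold=0.2):
        self.station_id = station_id
        self.capacity = capacity
        self.sleep_threshold = sleep_threshold
        self.asleep = False

    def step(self, local_demand, neighbour_spare_capacity):
        load = local_demand / self.capacity
        if not self.asleep:
            # Sleep only if demand is low AND neighbours can absorb it.
            if (load < self.sleep_threshold
                    and neighbour_spare_capacity >= local_demand):
                self.asleep = True
        elif load >= self.sleep_threshold:
            # Wake up as soon as local demand recovers.
            self.asleep = False
        return self.asleep

# Toy simulation: fluctuating demand at a single station.
agent = StationAgent("cell-42")
for t in range(5):
    demand = random.uniform(0, 100)
    state = "asleep" if agent.step(demand, 80.0) else "active"
    print(t, round(demand, 1), state)
```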

    Second order probabilistic models for within-document novelty detection in academic articles

    It is becoming increasingly difficult to stay aware of the state-of-the-art in any research field due to the exponential increase in the number of academic publications. This problem affects authors and reviewers of submissions to academic journals and conferences, who must be able to identify which portions of an article are novel and which are not. A process to automatically judge the flow of novelty through a document would therefore assist academics in their quest for truth. In this article, we propose the concept of Within-Document Novelty Location, a method of identifying locations of novelty and non-novelty within a given document. In this preliminary investigation, we examine whether a second-order statistical model has any benefit, in terms of accuracy and confidence, over a simpler first-order model. Experiments on 928 text sequences taken from three academic articles showed that the second-order model provided a significant increase in novelty-location accuracy for two of the three documents. There was no significant difference in accuracy for the remaining document, which is likely due to the absence of context analysis.
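    One common way to realise first- and second-order models over token sequences is with unigram and bigram language models, scoring a new passage by its negative log-likelihood under the history. The sketch below takes that interpretation; the paper's exact models are not specified here, and the Laplace smoothing and vocabulary size are illustrative choices.

```python
import math
from collections import Counter

def novelty_scores(seen_tokens, new_tokens, vocab_size=10_000):
    """Score a new token sequence against the history with a first-order
    (unigram) and a second-order (bigram) model; higher = more novel.
    Laplace smoothing is an illustrative choice, not the paper's."""
    unigrams = Counter(seen_tokens)
    bigrams = Counter(zip(seen_tokens, seen_tokens[1:]))
    n = len(seen_tokens)

    # First order: average negative log-probability of each token.
    first = -sum(
        math.log((unigrams[t] + 1) / (n + vocab_size)) for t in new_tokens
    ) / len(new_tokens)

    # Second order: average negative log-probability of each token pair.
    second = -sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))
        for a, b in zip(new_tokens, new_tokens[1:])
    ) / max(len(new_tokens) - 1, 1)
    return first, second

history = "the model was trained on the corpus".split()
print(novelty_scores(history, "the corpus was large".split()))
```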